Results 1 - 10 of 10
1.
Computer ; 56(5):74-83, 2023.
Article in English | ProQuest Central | ID: covidwho-2294665

ABSTRACT

We present a case study of a black-box artificial intelligence-based COVID-19 detection product, GeNose C-19, developed by the Indonesian government. We find that explaining how GeNose works using functional analogies increases both Indonesian and American lay consumers' trust in GeNose.

2.
2022 IEEE Frontiers in Education Conference, FIE 2022 ; 2022-October, 2022.
Article in English | Scopus | ID: covidwho-2191750

ABSTRACT

To most students the internal machinations of the university are a black box; very rarely are they permitted to see behind the curtain. While in many areas academia has started to move away from the sage-on-the-stage mentality, much of what is done still does not involve the students' voice. While they have the opportunity to provide feedback on individual subjects, the structure of students' whole degrees is still the domain of the sage. At the University of Technology Sydney (UTS) we are reviewing our professional practice program for engineering. This program sees students complete professional experience activities such as internships, reflections and professional skill development in order to give them the opportunity to develop as professionals. While the program is well received by most stakeholders, it has remained largely the same for some time. Changes in the Higher Education sector, changing student needs and learning from the COVID-19 disruption have resulted in a review looking to redevelop the program. Typically a program review would be an opaque process for students, if they were aware of it at all. However, UTS sought to bring students into the program development from an early stage. Engineering and IT students from any year of study were invited to apply to join a seven-week co-design studio over their summer semester to reimagine professional practice at UTS. They were taken through the design thinking process to imagine a future program that meets the needs of all stakeholders. Students worked through empathising with past and current students, program academics, Work-Integrated Learning (WIL) experts, industry professionals and others they identified as important stakeholders.
Additionally, the students completed independent research on context topics they identified as critical to understanding the space. The results of the project were that students identified three key foci for their program:
• Supporting the development of a diverse student cohort
• Improving the feedback loop between students, industry, and the University
• Fostering connection(s) between the University and industry
To meet these aims the students proposed innovative solutions, including a degree structure with an exit point for a lower qualification should a student not need the full qualification, and a flexi-points system to provide students access to a flexible professional development scheme tailored to each student's needs. Throughout the studio the students independently developed both insights and ideas that had previously been raised by the University and new insights and ideas that the University had not considered. They developed their design thinking, professional practice, and complex problem solving skills, and expressed an appreciation for the chance to better understand how and why the University works behind the scenes. From the perspective of subject designers, the process and engagement of students reinvigorated academics affected by a long COVID-19 disruption that had seen diminished engagement from students. This process significantly benefited all involved through the development of skills and knowledge in students, the reinvigoration of academic staff, and the development of confirmatory and new insights and ideas for the University. This innovative practice will be broadened and continued at UTS, with the co-design processes it supported becoming the norm rather than the exception when redeveloping course content and program structures. © 2022 IEEE.

3.
24th International Conference on Human-Computer Interaction, HCII 2022 ; 13518 LNCS:441-460, 2022.
Article in English | Scopus | ID: covidwho-2173820

ABSTRACT

This paper presents a user-centered approach to translating techniques and insights from AI explainability research into effective explanations of complex issues in other fields, using COVID-19 as an example. We show how the problem of AI explainability and the explainability problem in the COVID-19 pandemic are related: they are two specific instances of a more general explainability problem that occurs when people face non-transparent, complex systems and processes whose functioning is not readily observable and understandable to them ("black boxes"). Accordingly, we discuss how we applied an interdisciplinary, user-centered approach based on Design Thinking to develop a prototype of a user-centered explanation for a complex issue regarding people's perception of COVID-19 vaccine development. The developed prototype demonstrates how AI explainability techniques can be adapted and integrated with methods from communication science, visualization and HCI in this context. We also discuss results from a first evaluation in a user study with 88 participants and outline future work. The results indicate that it is possible to effectively apply methods and insights from explainable AI to explainability problems in other fields, and they support the suitability of our conceptual framework for informing such work. In addition, we show how the lessons learned in the process provide new insights for further work on user-centered approaches to explainable AI itself. © 2022, The Author(s).

4.
3rd International Conference on Intelligent Computing, Instrumentation and Control Technologies, ICICICT 2022 ; : 528-531, 2022.
Article in English | Scopus | ID: covidwho-2136255

ABSTRACT

In the field of medical science, the reliability of the results produced by deep learning classifiers for disease diagnosis plays a crucial role. This reliability is substantially reduced by the presence of adversarial examples, which mislead classifiers into giving wrong predictions with equal or greater confidence than the actual prediction. Black-box adversarial attacks are carried out by creating a pseudo model that resembles the target model; the attack is crafted on the pseudo model and then transferred to the target. In this work, the Fast Gradient Sign Method and its variants, the Momentum Iterative Fast Gradient Sign Method, Projected Gradient Descent and the Basic Iterative Method, are used to create adversarial examples against a target VGG-16 model. The datasets used are the Diabetic Retinopathy 2015 Data Colored Resized and SARS-CoV-2 CT Scan datasets. The experiments revealed that the attacks described above transfer successfully to a VGG-16 model, and that the Projected Gradient Descent attack achieves a higher attack success rate than the other methods examined in this work. © 2022 IEEE.
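The Fast Gradient Sign Method underlying all the attacks above can be sketched in a few lines. The model here is a toy logistic-regression classifier standing in for VGG-16 (its closed-form input gradient keeps the sketch self-contained), and the weights, input, and epsilon are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    # Binary cross-entropy of a linear model on input x with label y.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    # For a linear model the input gradient of the loss has a closed form:
    # dL/dx = (sigmoid(w.x + b) - y) * w
    grad = (sigmoid(w @ x + b) - y) * w
    # Step eps in the sign of the gradient (increasing the loss), then
    # clip back to the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1
x = rng.uniform(size=8)   # stand-in "image" with pixels in [0, 1]
y = 1.0                   # true label
x_adv = fgsm(x, y, w, b, eps=0.05)
```

The iterative variants named in the abstract (BIM, MI-FGSM, PGD) repeat this step with a smaller step size, adding momentum or a projection back into the epsilon-ball around the original input.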

5.
3rd Workshop on Intelligent Data - From Data to Knowledge, DOING 2022, 1st Workshop on Knowledge Graphs Analysis on a Large Scale, K-GALS 2022, 4th Workshop on Modern Approaches in Data Engineering and Information System Design, MADEISD 2022, 2nd Workshop on Advanced Data Systems Management, Engineering, and Analytics, MegaData 2022, 2nd Workshop on Semantic Web and Ontology Design for Cultural Heritage, SWODCH 2022 and Doctoral Consortium which accompanied 26th European Conference on Advances in Databases and Information Systems, ADBIS 2022 ; 1652 CCIS:14-23, 2022.
Article in English | Scopus | ID: covidwho-2048129

ABSTRACT

During the COVID-19 pandemic, the misinformation problem arose again on social networks, as an epidemic of harmful health advice and false solutions. In Brazil, one of the primary sources of misinformation is the messaging application WhatsApp, so automatic misinformation detection (MID) for COVID-19 content in Brazilian Portuguese WhatsApp messages has become a crucial challenge. Recently, some works have presented different MID approaches for this purpose. Despite this success, most of the MID models explored remain complex black boxes: their internal logic and inner workings are hidden from users, who cannot fully understand why a MID model assessed a particular WhatsApp message as misinformation or not. Thus, in this article, we explore a post-hoc interpretability method called LIME to explain the predictions of MID approaches. In addition, we apply a textual analysis tool called LIWC to analyze the linguistic characteristics of WhatsApp messages and identify psychological aspects present in misinformation and non-misinformation messages. The results indicate that it is feasible to understand relevant aspects of the MID model's predictions and to find patterns in WhatsApp messages about COVID-19. We hope that these findings help in understanding the misinformation phenomenon surrounding COVID-19 in WhatsApp messages. © 2022, Springer Nature Switzerland AG.
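LIME's core idea for text can be sketched by hand (this is the algorithm's skeleton, not the `lime` package itself): perturb a message by dropping words, query the black-box classifier on each perturbed copy, and fit a locally weighted linear surrogate whose coefficients score each word's influence. The toy classifier and the example message below are invented for illustration, not taken from the paper:

```python
import numpy as np

def black_box(words):
    # Stand-in "MID model": flags any message containing the word "cure".
    return 1.0 if "cure" in words else 0.0

def lime_word_weights(message, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    words = message.split()
    # Binary masks: which words are kept in each perturbed sample.
    Z = rng.integers(0, 2, size=(n_samples, len(words))).astype(float)
    y = np.array([black_box([w for w, keep in zip(words, z) if keep])
                  for z in Z])
    # Proximity kernel: samples closer to the full message weigh more.
    dist = 1.0 - Z.mean(axis=1)
    pi = np.exp(-(dist ** 2) / 0.25)
    # Weighted least squares: scale rows by sqrt(pi), then solve for the
    # surrogate's per-word coefficients.
    sw = np.sqrt(pi)
    beta, *_ = np.linalg.lstsq(Z * sw[:, None], y * sw, rcond=None)
    return dict(zip(words, beta))

weights = lime_word_weights("garlic tea is a miracle cure for covid")
print(max(weights, key=weights.get))  # → cure
```

Because the surrogate is linear, the learned coefficients can be read directly as "how much each word pushed the prediction", which is exactly the kind of per-message explanation the article extracts for WhatsApp messages.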

6.
2022 Genetic and Evolutionary Computation Conference, GECCO 2022 ; : 1763-1769, 2022.
Article in English | Scopus | ID: covidwho-2020380

ABSTRACT

Since the first wave of the COVID-19 pandemic, governments have applied restrictions in order to slow its spread. However, creating such policies is hard, especially because governments need to trade off the spread of the pandemic against economic losses. For this reason, several works have applied machine learning techniques, often with the help of special-purpose simulators, to generate policies that were more effective than the ones adopted by governments. While the performance of such approaches is promising, they suffer from a fundamental issue: since they are based on black-box machine learning, their real-world applicability is limited, because the resulting policies can be neither analyzed nor tested, and are thus not trustworthy. In this work, we employ a recently developed hybrid approach, which combines reinforcement learning with evolutionary computation, to generate interpretable policies for containing the pandemic. These policies, trained on an existing simulator, aim to reduce the spread of the pandemic while minimizing economic losses. Our results show that our approach is able to find solutions that are extremely simple, yet very powerful: in fact, it performs significantly better (in simulated scenarios) than both previous work and government policies. © 2022 ACM.
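An "interpretable policy" of the kind such hybrid approaches evolve is essentially a short, human-readable decision list mapping the observed epidemic state to a restriction level. The thresholds, state variables, and restriction names below are invented examples to show the form, not the evolved policies from the paper:

```python
def policy(new_cases_per_100k, icu_occupancy):
    # A decision list a policymaker can read, audit, and test line by line,
    # unlike a black-box neural controller.
    if icu_occupancy > 0.8:
        return "lockdown"
    if new_cases_per_100k > 50:
        return "close_venues"
    if new_cases_per_100k > 10:
        return "mask_mandate"
    return "no_restrictions"

print(policy(new_cases_per_100k=60, icu_occupancy=0.5))  # → close_venues
```

The evolutionary component searches over rule structures like this while reinforcement learning tunes them against the simulator, which is what makes the final policy both performant and inspectable.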

7.
35th Conference on Neural Information Processing Systems, NeurIPS 2021 ; 6:4699-4711, 2021.
Article in English | Scopus | ID: covidwho-1897540

ABSTRACT

Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high stakes decision-making domains such as healthcare. We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing state-of-the-art generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees. To demonstrate this, we show how NAMs can be used for multitask learning on synthetic data and on the COMPAS recidivism data due to their composability, and demonstrate that the differentiability of NAMs allows them to train more complex interpretable models for COVID-19. Source code is available at neural-additive-models.github.io. © 2021 Neural information processing systems foundation. All rights reserved.
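The structure that makes NAMs intelligible, one small network per input feature with the outputs summed, can be sketched as a forward pass. The weights here are random and untrained, and all sizes are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_feature_net(hidden=16):
    # A tiny one-hidden-layer MLP that maps a single scalar feature
    # to a scalar contribution.
    return {"w1": rng.normal(size=(1, hidden)), "b1": np.zeros(hidden),
            "w2": rng.normal(size=(hidden, 1)), "b2": np.zeros(1)}

def feature_net(params, xi):
    h = np.maximum(0.0, xi * params["w1"] + params["b1"])  # ReLU
    return (h @ params["w2"] + params["b2"]).item()

n_features = 4
nets = [make_feature_net() for _ in range(n_features)]
bias = 0.0

def nam_predict(x):
    # Each feature has its own network; the prediction is their sum, so
    # each term is that feature's "shape function" and can be plotted
    # on its own to explain the model.
    parts = [feature_net(p, xi) for p, xi in zip(nets, x)]
    return bias + sum(parts), parts

x = rng.uniform(size=n_features)
y_hat, parts = nam_predict(x)
```

By construction the prediction decomposes exactly into per-feature contributions, and changing one feature can only move its own term, which is the intelligibility property the abstract trades against full DNN expressivity.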

8.
2nd International Conference on Information Technology and Education, ICIT and E 2022 ; : 1-4, 2022.
Article in English | Scopus | ID: covidwho-1861109

ABSTRACT

This research aims to develop software that can detect faces wearing masks, not wearing masks, or wearing masks in incorrect positions, using the TensorFlow library and applying the Convolutional Neural Network (CNN) method. The software development method used is the Incremental Method: increment one focuses on building a CNN model evaluated with a confusion matrix, and increment two focuses on developing the software interface using black-box testing. The result of increment one is a CNN model whose confusion matrix evaluation yields 98.83% accuracy, 98.84% precision, 98.78% recall, and a 98.81% F1-score. The result of increment two is a software interface that has been black-box tested and is ready to be used for detection. The final result of this research is software that can detect human faces wearing masks, not wearing masks, or wearing masks in an incorrect position. © 2022 IEEE.
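The evaluation in increment one boils down to standard formulas over a 3-class confusion matrix (mask / no mask / incorrectly worn). The matrix values below are made up for illustration; the paper's actual counts are not given in the abstract:

```python
import numpy as np

def metrics(cm):
    # cm[i][j] = number of samples of actual class i predicted as class j.
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # per class: TP / all predicted as class
    recall = tp / cm.sum(axis=1)      # per class: TP / all actually in class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    # Macro-averaging treats the three classes equally.
    return accuracy, precision.mean(), recall.mean(), f1.mean()

cm = [[50, 1, 0],   # actual: mask
      [2, 47, 1],   # actual: no mask
      [0, 1, 48]]   # actual: incorrectly worn
acc, prec, rec, f1 = metrics(cm)
```

Reporting all four numbers, as the abstract does, matters here because the three classes can be imbalanced: accuracy alone would hide a model that rarely flags incorrectly worn masks.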

9.
2021 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, WI-IAT 2021 ; : 194-201, 2021.
Article in English | Scopus | ID: covidwho-1832576

ABSTRACT

BERT models are currently state-of-the-art solutions for various tasks, including stance classification. However, these models are a black box for their users. Some proposals have leveraged the weights assigned by the internal attention mechanisms of these models for interpretability purposes, but whether attention weights actually help the interpretability of the model is still a matter of debate, with positions both for and against. This work proposes an attention-based interpretability mechanism to identify the most influential words for stances predicted by BERT-based models. We target stances expressed on Twitter in Portuguese and assess the proposed mechanism in a case study on stances toward COVID-19 vaccination in the Brazilian context. The interpretation mechanism traces token attentions back to words, assigning a newly proposed metric referred to as absolute word attention. Through this metric, we assess several aspects to determine whether we can find words that are important for the classification and meaningful for the domain. We developed a broad experimental setting involving three datasets of tweets in Brazilian Portuguese and three BERT models with support for this language. Our results are encouraging: we were able to identify 52-82% of words with high absolute attention contributing positively to stance classification. The interpretability mechanism proved helpful for understanding the influence of words on the classification, and it revealed intrinsic properties of the domain and representative arguments of the stances. © 2021 ACM.
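The step of tracing token attentions back to words can be sketched as follows: BERT's WordPiece tokenizer marks subword continuations with "##", so merging each continuation into its parent word and summing the attention mass yields a per-word score. This shows the mechanics only; the tokens, weights, and the summation rule are illustrative assumptions, not the paper's exact definition of absolute word attention:

```python
def word_attention(tokens, attn):
    # Merge WordPiece subtokens ("##...") back into words, accumulating
    # each subtoken's attention weight into its parent word's score.
    words, scores = [], []
    for tok, a in zip(tokens, attn):
        if tok.startswith("##") and words:
            words[-1] += tok[2:]
            scores[-1] += a
        else:
            words.append(tok)
            scores.append(a)
    return dict(zip(words, scores))

tokens = ["vacina", "##ção", "contra", "covid"]
attn = [0.30, 0.25, 0.10, 0.35]
merged = word_attention(tokens, attn)
# "vacina" + "##ção" merge into a single word with their attention summed.
```

Working at the word level rather than the token level is what lets the metric be compared against words that are meaningful in the vaccination-stance domain.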

10.
24th International Conference on Computer and Information Technology, ICCIT 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1714044

ABSTRACT

COVID-19 continues to have a catastrophic effect on the world, causing terrible hotspots to appear all over the place. Due to the global epidemic and shortages of doctors and healthcare personnel, developing an AI-based system to detect COVID-19 in a timely and cost-effective manner has become a necessity. It is also essential to detect COVID-19 from chest X-ray and CT radiographs, both because of their accuracy in detecting lung infection and to understand disease severity. Moreover, although the number of infected people around the globe is enormous, the COVID-19 datasets available for building AI systems are scarce and scattered. In this letter, we present a chest CT scan (HRCT) dataset of COVID-19 and healthy patients covering a varying range of COVID-19 severity, which we published on Kaggle and which can assist other researchers in contributing to healthcare AI. We also developed three deep learning approaches for detecting COVID-19 quickly and cheaply. Our three transfer learning-based approaches, Inception v3, ResNet-50, and VGG16, achieve accuracies of 99.8%, 91.3%, and 99.3%, respectively, on unseen data. We delve deeper into the black boxes of these models to demonstrate how they reach their conclusions, and we found that, despite the low accuracy of the model based on VGG16, it detects the COVID-affected spots in images well, which we believe may further assist doctors in visualizing which regions are affected. © 2021 IEEE.
